POV-Ray : Newsgroups : povray.programming : speed up povray by recompiling for graphics card
  Re: speed up povray by recompiling for graphics card  
From: Tom York
Date: 21 Jun 2006 19:05:01
Message: <web.4499d00dbda360d27d55e4a40@news.povray.org>
Patrick Elliott <sel### [at] rraznet> wrote:
> In article <web.44985e8dbda360d27d55e4a40@news.povray.org>,
> Yeah. For what I want that might be usable, though it only adds a layer
> of additional complication, given I don't yet understand the math, never
> mind the code to use the GPU. lol

Hm, well, that webpage is probably a place to start :)

> Well, a lot of stuff you have no choice. Even Isosurfaces have some
> flaws, like not being able to self bound in a way that would knock off
> bits that stick out where they shouldn't be.

Isosurfaces are one of those things that I've never had the patience for,
although from some perspectives they're a way to compress enormous amounts
of data.

> If even 20% of something in a game
> was possible using more simplistic primitives, then that is 20% of the
> objects you don't need to build out of triangles. This means more space
> on the game disc for the "game" and less bandwidth for all the stuff you
> have to feed to a player for an online one.

I don't write games, but I hear from friends who do that bottlenecks in a
combined CPU/GPU rendering system can turn up in surprising places, such as
switching materials, rather more than in actually getting those things onto
the card's memory in the first place.

Looking at the sort of models that games involve, about the most likely
substitute as a primitive would be variations on bicubic patches; I can't
imagine most other primitives being worth the effort. I think the storage
argument is weak, since storage is far cheaper than CPU time in my opinion,
but bandwidth is, yes, expensive.

The point about bandwidth has been examined in various interesting ways. I
don't know if you've seen the .kkrieger first-person shoot-em-up (I think
that's its name). It's about 100 kB in size and uses clever compression
techniques (sort of procedural, I suppose) to pack in what's alleged to be
several gigabytes of data. It spends about two minutes decompressing chunks
of that when you load it up.

There was also a PC game called Sacrifice which was alleged to store
special-effects meshes as procedures. One use of what I suppose you could
call procedural geometry is that you could perhaps tie it to machine speed:
faster machines take the procedural description and tessellate it into more
renderable primitives (triangles) than slower machines would, the
appropriate level being determined by the user at set-up time.
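A minimal sketch of that set-up-time idea in Python (the budget figures and function names are mine, invented for illustration, not taken from Sacrifice or any real engine):

```python
# Sketch: choose a tessellation level for bicubic patches from a per-frame
# triangle budget. The budget figures are invented for illustration; a real
# engine might benchmark the machine or ask the user at set-up time.

def tessellation_level(triangle_budget, patch_count, max_level=64):
    """Return the largest n such that tessellating each patch into an
    n x n grid of quads (2*n*n triangles) stays within the budget."""
    per_patch = triangle_budget // max(patch_count, 1)
    n = 1
    while n < max_level and 2 * (n + 1) ** 2 <= per_patch:
        n += 1
    return n

# A faster machine affords a bigger budget, hence finer patches:
slow = tessellation_level(triangle_budget=20_000, patch_count=100)     # n = 10
fast = tessellation_level(triangle_budget=2_000_000, patch_count=100)  # n = 64 (capped)
```

The same compact patch description ships to everyone; only the tessellated triangle count differs per machine.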

> Even if all you are
> producing is still images, you are still looking at a "huge" data spike
> every time they need to transfer a new model, or worse, an entirely new
> room.

I don't think that's particularly fatal (I assume you're talking about some
sort of still renderer here). Most renderers for that, scanline, raytracer
or whatever, have a big set-up hit; POV-Ray has the parsing phase and
image-texture loading, for example. Specifically in regard to the GPU, its
bus typically has the widest bandwidth in a typical PC, IIRC. It might
even be faster to send data to a GPU renderer than it would be to fetch it
from main memory to the CPU, I'm not sure.

> Real time lighting changes, etc. have to be done in the game
> engine, not generated on the other end, because if you don't cache the
> files to produce it, you are looking at that same data hit every time
> you enter the room.

Again, I'm not sure that this is as significant as it appears.

> Yeah, if you are making a major motion picture and have lots of money to
> buy a mess of computers, the best available software tools and people
> that know how to use them, and you are planning on having the final
> project come up 5 years from now, great.

Well, let's take the opposite end. Logos for adverts and TV special effects
shots (like in scifi series) are typically done on a relative shoestring
(i.e. less than it would cost to build a physical model) and need to be
done last week. They don't raytrace either.

They will eventually, though.

>  If you want it to update more
> or less real time, make changes on the fly and people are going to be
> getting the content over the internet (possibly not all on the "best"
> high speed connections), then from that perspective the current
> architecture is not practical.

Well, broadband connections are spreading very rapidly, I think. My only
experience of low-bandwidth "over-the-network" 3D is VRML-style interactive
stuff, and to be honest, back when I saw it my impression was that it was
both academic and a bit pointless at the sort of quality you'd get on a
low-bandwidth connection. One major producer of PC games (and many
shareware producers) has moved to distributing their products over the
internet as a first, rather than second choice. The problems the majority
of internet-based games face are more to do with latency and reliability
than raw bandwidth, I think.

> If anything it's amazing that things like
> Uru Live or the shard projects branching off of it, work at all over the
> internet, even with high speed, same with Second Life or other "true" 3D
> worlds. Heck, the isometric ones only really work because they require
> installing patches with the new models and textures to extend the game.
> If you couldn't buy a disk and install it, maybe 50% of the players
> would never go past the first version.

I'm not sure the problem is with geometry. Game geometry, even mesh
geometry, is pretty lightweight. It's all those textures that make the
difference, and I don't think changing the type of primitive you use gets
you away from the need to detail surfaces. Procedural textures are one
alternative, but they cost in CPU time.
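To illustrate that trade-off (the names and pattern here are mine, purely illustrative): a baked texture costs storage and download bandwidth but makes lookups cheap, while a procedural pays CPU on every sample:

```python
import math

# Illustrative contrast (not from any particular engine): a stored texture
# trades memory and download bandwidth for a cheap lookup; a procedural
# texture trades CPU per sample for near-zero storage.

def procedural_checker(u, v, scale=8):
    """Evaluate a checker pattern on demand: no storage, CPU on every call."""
    return (math.floor(u * scale) + math.floor(v * scale)) % 2

def bake_texture(width, height, scale=8):
    """Pre-evaluate the same pattern into a stored image: width*height
    texels to keep (and ship), but each later lookup is an array index."""
    return [[procedural_checker(x / width, y / height, scale)
             for x in range(width)] for y in range(height)]

image = bake_texture(256, 256)          # 64K texels to store or download
texel = image[128][64]                  # cheap lookup at render time
sample = procedural_checker(0.25, 0.5)  # cheap to store, pays CPU per sample
```

Either way the surface detail has to come from somewhere; you only get to choose which resource pays for it.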

> Anyway, that is the standpoint I am coming from. Not, "how do I make a
> photo?", but, "how do I do this without shoving 4GB of data onto
> someone's computer, then cramming as much stuff as I can down the pipe
> anyway when they play the game?" The latter is why new content is not
> exactly a staple for graphical online games. lol

I think it's becoming more important. I seem to be regularly updated with
content for Half-Life and derivatives, and I suppose other games will only
follow suit. They've been distributing upgrades and bug-fix patches this
way for years. Also, you could use those procedural compression tricks
again; uncompress the data into meshes on the user's computer after
downloading.

>
> Now, if using the scanline system "could" allow approximation of the
> primitives at a decent speed and still make things faster... That could
> help too. But it's still not going to do "some" things very well
> without a lot of kludging, like reflecting objects not "in view", etc.

I don't know how important such effects are. You can render a reflective
sphere primitive with a raytracer, be very memory-efficient, and
incidentally get a perfect reflection, but it may well look less "real"
than a few hundred kB of relatively memory-inefficient mesh with a few MB
of displacement map and textures on it. Reflection's an interesting one:
blurred reflection is cheap in scanline rendering, and can look more
realistic to the eye than sharp raytraced reflections, even though the
latter are accurate and the former isn't.
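A rough sketch of why the blurred case is the expensive one for a raytracer (the sampling scheme here is a crude illustration of mine, not POV-Ray's): a sharp reflection traces one secondary ray per hit, while a blurred one averages many jittered rays, and the count compounds with reflection depth:

```python
import random

# Crude illustration (not POV-Ray's actual method): blurred reflection
# averages many jittered secondary rays per hit, and the ray count
# compounds at every reflection bounce; a sharp reflection needs one.

def jitter(direction, roughness, rng):
    """Perturb a direction vector by a small random offset per component."""
    return tuple(d + rng.uniform(-roughness, roughness) for d in direction)

def reflection_rays(samples_per_hit, depth):
    """Total secondary rays spawned by one primary hit when every blurred
    sample is followed down to the given reflection depth."""
    total, per_level = 0, 1
    for _ in range(depth):
        per_level *= samples_per_hit
        total += per_level
    return total

rng = random.Random(0)
blurred_dir = jitter((0.0, 0.0, 1.0), 0.05, rng)  # one glossy sample direction

sharp = reflection_rays(samples_per_hit=1, depth=3)     # 3 rays
blurred = reflection_rays(samples_per_hit=16, depth=3)  # 4368 rays
```

A scanline renderer fakes the blur with a cheap filtered environment lookup instead, which is why it wins there despite being less accurate.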

> It is an issue made all the more complicated by the fact that if I
> wanted to really do something, most of the books that provided clear
> information on how most of the raytrace stuff works were last published...
> 15-20 years ago. :( Now, it's sort of a "turtles all the way down" world,
> where everyone "assumes" you are just going to use Blender to make a
> model, then throw it at a GPU. Quite annoying and I am way too lazy to
> spend weeks trying to find the information online (all the while dodging
> sites that refer back to GPUs) or probably even longer trying to figure
> out how the code in POV-Ray, which I can't actually use anyway the way
> I want, works. Oh well...

I'm not sure I understand. Are you talking about writing a graphics engine
for a game using raytracing instead of scanline methods, or a
high-quality/non-game rendering engine?

One thing I'd recommend is to check out the work behind

http://www.openrt.de/

I remember the papers behind it being spectacular, frankly.

> But yeah, for some things GPUs are practical. If you have a) bandwidth,
> b) storage space, c) money and d) a lot of time. If a and b are limited
> and your intent is to avoid a lot of c and d, especially if d is the one
> thing you want to avoid needing, you are screwed when using GPUs. ;)

I think most of these are answered above. Certainly I'd argue that
raytracing is far more time-intensive for game applications. But I don't
understand what application you have in mind.

Tom



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.